
    Faults in Linux 2.6

    In August 2011, Linux entered its third decade. Ten years earlier, Chou et al. published a study of faults found by applying a static analyzer to Linux versions 1.0 through 2.4.1. A major result of their work was that the drivers directory contained up to 7 times more of certain kinds of faults than other directories. This result inspired numerous efforts to improve the reliability of driver code. Today, Linux is used in a wider range of environments, provides a wider range of services, and has adopted a new development and release model. What has been the impact of these changes on code quality? To answer this question, we have transported Chou et al.'s experiments to all versions of Linux 2.6, released between 2003 and 2011. We find that Linux has more than doubled in size during this period, but that the number of faults per line of code has been decreasing. Moreover, the fault rate of drivers is now below that of other directories, such as arch. These results can guide further development and research efforts for the decade to come. To allow these results to be updated as Linux evolves, we define our experimental protocol and make our checkers available.
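
    To make the fault kinds concrete, here is a small hypothetical driver fragment (our illustration, not code from the study) exhibiting two classic fault categories that such checkers flag: a dereference before a NULL test, and a missing unlock on an error path.

```c
/* Hypothetical driver fragment illustrating two fault kinds that
 * static checkers of this sort report. */
#include <stddef.h>

struct device { int status; };
extern void lock(void);
extern void unlock(void);
extern int do_io(struct device *dev);

int probe(struct device *dev)
{
    int st = dev->status;      /* FAULT: dereference before the NULL check */
    if (dev == NULL)
        return -1;

    lock();
    if (do_io(dev) < 0)
        return -1;             /* FAULT: returns with the lock still held */
    unlock();
    return st;
}
```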

    Clang and Coccinelle: Synergising program analysis tools for CERT C Secure Coding Standard certification

    Writing correct C programs is well-known to be hard, not least due to the many language features intrinsic to C. Writing secure C programs is even harder and, at times, seemingly impossible. To improve on this situation the US CERT has developed and published a set of coding standards, the “CERT C Secure Coding Standard”, that (in the current version) enumerates 118 rules and 182 recommendations with the aim of making C programs (more) secure. The large number of rules and recommendations makes automated tool support essential for certifying that a given system is in compliance with the standard. In this paper we report on ongoing work on integrating two state-of-the-art analysis tools, Clang and Coccinelle, into a combined tool well suited for analysing and certifying C programs according to, e.g., the CERT C Secure Coding Standard or the MISRA (Motor Industry Software Reliability Association) C standard. We further argue that such a tool must be highly adaptable and customisable to each software project as well as to the certification rules required by a given standard. Clang is the C frontend for the LLVM compiler/virtual machine project, which includes a comprehensive set of static analyses and code checkers. Coccinelle is a program transformation tool and bug-finder developed originally for the Linux kernel, but it has been successfully used to find bugs in other Open Source projects such as WINE and OpenSSL.
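
    To give a flavor of what certification against such a standard involves, the sketch below shows a violation and a compliant fix of one CERT C rule, EXP34-C (“Do not dereference null pointers”). The example is ours, not drawn from the paper.

```c
/* A violating and a compliant version of the same helper, against the
 * CERT C rule EXP34-C: the result of malloc may be NULL and must be
 * checked before use. */
#include <stdlib.h>
#include <string.h>

char *dup_name_bad(const char *name)
{
    char *copy = malloc(strlen(name) + 1);
    strcpy(copy, name);        /* violation: copy may be NULL */
    return copy;
}

char *dup_name_good(const char *name)
{
    char *copy = malloc(strlen(name) + 1);
    if (copy == NULL)          /* compliant: check before dereferencing */
        return NULL;
    strcpy(copy, name);
    return copy;
}
```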

    On the Effectiveness of Information Retrieval Based Bug Localization for C Programs

    Localizing bugs is important, difficult, and expensive, especially for large software projects. To address this problem, information retrieval (IR) based bug localization has increasingly been used to suggest potential buggy files given a bug report. To date, researchers have proposed a number of IR techniques for bug localization and empirically evaluated them to understand their effectiveness. However, virtually all of the evaluations have been limited to projects written in object-oriented programming languages, particularly Java. Therefore, the effectiveness of these techniques for other widely-used languages such as C is still unknown. In this paper, we create a benchmark dataset consisting of more than 7,500 bug reports from five popular C projects and rigorously evaluate our recently introduced IR-based bug localization tool using this dataset. Our results indicate that although the IR-relevant properties of C and Java programs are different, IR-based bug localization in C software at the file level is overall as effective as in Java software. However, we also find that the recent advance of using program structure information in performing bug localization gives less of a benefit for C software than for Java software.
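
    As a rough illustration of the underlying idea (far simpler than the paper's tool, and over a tiny hypothetical vocabulary), the sketch below scores a source file against a bug report by the cosine similarity of their term-frequency vectors; real systems add TF-IDF weighting, stemming, and structure-aware boosts.

```c
/* Minimal core of IR-based bug localization: rank a file by the cosine
 * similarity between term-frequency vectors of the bug report and the
 * file's text. Vocabulary and inputs are invented for the example. */
#include <math.h>
#include <stdio.h>
#include <string.h>

#define VOCAB 4
static const char *vocab[VOCAB] = { "socket", "timeout", "buffer", "parse" };

static void tf(const char *text, double v[VOCAB])
{
    for (int i = 0; i < VOCAB; i++) {
        v[i] = 0;
        for (const char *p = text; (p = strstr(p, vocab[i])); p++)
            v[i] += 1;                    /* crude substring counting */
    }
}

static double cosine(const double a[VOCAB], const double b[VOCAB])
{
    double dot = 0, na = 0, nb = 0;
    for (int i = 0; i < VOCAB; i++) {
        dot += a[i] * b[i];
        na  += a[i] * a[i];
        nb  += b[i] * b[i];
    }
    return (na && nb) ? dot / (sqrt(na) * sqrt(nb)) : 0;
}

int main(void)
{
    double report[VOCAB], file[VOCAB];
    tf("timeout when socket buffer is full", report);
    tf("int socket_read(struct buffer *b, int timeout) { ... }", file);
    printf("score = %.3f\n", cosine(report, file));
    return 0;
}
```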

    Efficient locking for multicore architectures

    The scalability of multithreaded applications on current multicore systems is hampered by the performance of critical sections, due in particular to the costs of access contention and cache misses. In this paper, we propose a new locking technique, Remote Core Locking (RCL), that aims to improve the performance of critical sections in legacy applications on multicore architectures. The idea of RCL is to replace lock acquisitions by optimized remote procedure calls to a dedicated server core. RCL limits the performance collapse observed with regular locks when many threads try to acquire a lock concurrently and removes the need to transfer lock-protected shared data to the core acquiring the lock: such data can typically remain in the server core's cache. Our microbenchmark shows that under high contention, RCL is always more efficient than the other state-of-the-art lock mechanisms, and a preliminary macrobenchmark evaluation shows performance gains on SPLASH-2 benchmarks (speedup up to 4.85) and on the Web cache application memcached (speedup up to 2.62).
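
    A minimal sketch of the RCL idea follows; the slot layout and the spin-based signalling are simplifying assumptions for illustration, not the paper's implementation.

```c
/* Sketch of Remote Core Locking: instead of acquiring a lock, a client
 * posts its critical section (a function and argument) into a per-client
 * slot, and a dedicated server core executes posted requests in turn. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define CLIENTS 4

static struct slot {
    void (*_Atomic fn)(void *);   /* non-NULL means a request is pending */
    void *arg;
} slots[CLIENTS];

static void *server(void *unused) /* runs on the dedicated server core */
{
    (void)unused;
    for (;;)
        for (int i = 0; i < CLIENTS; i++) {
            void (*f)(void *) = atomic_load(&slots[i].fn);
            if (f) {
                f(slots[i].arg);                 /* run critical section */
                atomic_store(&slots[i].fn, NULL);
            }
        }
    return NULL;
}

static void rcl_execute(int id, void (*f)(void *), void *arg)
{
    slots[id].arg = arg;                 /* argument first ... */
    atomic_store(&slots[id].fn, f);      /* ... then publish the request */
    while (atomic_load(&slots[id].fn))   /* spin until the server is done */
        ;
}

static void increment(void *p) { ++*(long *)p; }

int main(void)
{
    pthread_t srv;
    long counter = 0;
    pthread_create(&srv, NULL, server, NULL);
    rcl_execute(0, increment, &counter); /* critical section via RCL */
    printf("counter = %ld\n", counter);
    return 0;                            /* exiting also stops the server */
}
```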

    Diagnosys: Automatic Generation of a Debugging Interface to the Linux Kernel

    The Linux kernel does not export a stable, well-defined kernel interface, complicating the development of kernel-level services, such as device drivers and file systems. While there does exist a set of functions that are exported to external modules, this set of functions frequently changes, and the functions have implicit, ill-documented preconditions. No specific debugging support is provided. We present Diagnosys, an approach to automatically constructing a debugging interface for the Linux kernel. First, a designated kernel maintainer uses Diagnosys to identify constraints on the use of the exported functions. Based on this information, developers of kernel services can then use Diagnosys to generate a debugging interface specialized to their code. When a service including this interface is tested, it records information about potential problems. This information is preserved following a kernel crash or hang. Our experiments show that the generated debugging interface provides useful log information and incurs a low performance penalty.
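
    The following is a hypothetical illustration of the kind of wrapper such a tool could generate; the function and log names are invented for the example and are not the tool's actual output.

```c
/* A kernel service calls the wrapper instead of the exported function;
 * the wrapper checks an inferred precondition and writes a diagnosis
 * line to a log buffer that survives a crash. */
#include <stddef.h>

extern void crash_log(const char *msg);     /* crash-resilient log (stub) */
extern char *kernel_strdup(const char *s);  /* stand-in for an exported fn */

char *debug_kernel_strdup(const char *s)
{
    if (s == NULL) {                         /* inferred precondition */
        crash_log("kernel_strdup: NULL argument, likely oops ahead");
        return NULL;
    }
    return kernel_strdup(s);
}
```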

    Incinerator – Eliminating Stale References in Dynamic OSGi Applications

    Java class loaders are commonly used in application servers to load, unload and update a set of classes as a unit. However, unloading or updating a class loader can introduce stale references to the objects of the outdated class loader. A stale reference leads to a memory leak and, for an update, to an inconsistency between the outdated classes and their replacements. To detect and eliminate stale references, we propose Incinerator, a Java virtual machine extension that introduces the notion of an outdated class loader. Incinerator detects stale references and sets them to null during a garbage collection cycle. We evaluate Incinerator in the context of the OSGi framework and show that Incinerator correctly detects and eliminates stale references, including a bug in Knopflerfish. We also evaluate the performance of Incinerator with the DaCapo benchmark on VMKit and show that Incinerator has an overhead of at most 3.3%.
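
    The problem itself is specific to Java class loaders, but for consistency with the other sketches here is a C analogy: a function pointer retained across dlclose is a stale reference in much the same sense.

```c
/* C analogy for the stale-reference problem: a pointer obtained from a
 * plugin remains reachable after the plugin is unloaded, so using it is
 * undefined behavior, much as a reference into an outdated class loader
 * keeps its classes and objects alive in Java. Plugin path is invented. */
#include <dlfcn.h>

int main(void)
{
    void *plugin = dlopen("./plugin.so", RTLD_NOW);   /* hypothetical */
    if (!plugin)
        return 1;
    void (*handler)(void) = (void (*)(void))dlsym(plugin, "handler");
    dlclose(plugin);     /* the "class loader" is discarded ... */
    /* handler is now a stale reference; Incinerator's job, in the Java
     * setting, is to detect such references during GC and null them. */
    if (handler)
        handler();       /* undefined behavior: code may be unmapped */
    return 0;
}
```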

    Bridging the Gap between Legacy Services and Web Services

    Web Services is an increasingly used instantiation of Service-Oriented Architectures (SOA) that relies on standard Internet protocols to produce services that are highly interoperable. Other types of services, relying on legacy application-layer protocols, however, cannot be composed directly. A promising solution is to implement wrappers to translate between the application-layer protocols and the WS protocol. Doing so manually, however, requires a high level of expertise in the relevant application-layer protocols, in low-level network and system programming, and in the Web Service paradigm itself. In this paper, we introduce a generative language-based approach for constructing wrappers to facilitate the migration of legacy service functionalities to Web Services. To this end, we have designed the Janus domain-specific language, which provides developers with a high-level way to describe the operations that are required to encapsulate legacy service functionalities. We have successfully used Janus to develop a number of wrappers, including wrappers for IMAP and SMTP servers, for an RTSP-compliant media server and for UPnP service discovery. Preliminary experiments show that Janus-based WS wrappers have performance comparable to manually written wrappers.
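
    As a hand-written approximation of what a generated wrapper does (this is not Janus syntax, and the operation and helper names are invented), the sketch below maps a Web Service operation onto the corresponding command of a text-based legacy protocol, SMTP.

```c
/* A WS operation translated into a legacy-protocol exchange over an
 * already-connected socket. */
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* Send one SMTP command and collect the status line of the reply. */
static int smtp_command(int sock, const char *cmd, char *reply, size_t n)
{
    if (write(sock, cmd, strlen(cmd)) < 0)
        return -1;
    ssize_t got = read(sock, reply, n - 1);
    if (got <= 0)
        return -1;
    reply[got] = '\0';
    return 0;
}

/* Hypothetical WS operation "SendMailFrom" mapped to MAIL FROM. */
int ws_send_mail_from(int sock, const char *sender)
{
    char cmd[256], reply[256];
    snprintf(cmd, sizeof cmd, "MAIL FROM:<%s>\r\n", sender);
    if (smtp_command(sock, cmd, reply, sizeof reply) < 0)
        return -1;
    return strncmp(reply, "250", 3) == 0 ? 0 : -1;  /* 250 = OK */
}
```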

    Towards Bridging the Gap Between Programming Languages and Partial Evaluation

    Partial evaluation is a program-transformation technique that automatically specializes a program with respect to user-supplied invariants. Despite successful applications in areas such as graphics, operating systems, and software engineering, partial evaluators have yet to achieve widespread use. One reason is the difficulty of adequately describing specialization opportunities. Indeed, under-specialization or over-specialization often occurs, without any direct feedback to the user as to the source of the problem. We have developed a high-level, module-based language allowing the programmer to guide the choice of both the code to specialize and the invariants to exploit during the specialization process. To ease the use of partial evaluation, the syntax of this language is similar to the declaration syntax of the target language of the partial evaluator. To provide feedback to the programmer, declarations are checked throughout the analyses performed by partial evaluation. The language has been successfully used by a signal-processing expert in the design of a specializable Forward Error Correction component.
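
    The textbook example of partial evaluation makes the idea concrete: specializing a generic power function with respect to the invariant n == 3 yields a residual program with the loop unrolled, shown by hand below (the paper's language is about declaring such invariants, not this code).

```c
/* Generic code, and the residual program a partial evaluator would
 * produce for the invariant n == 3. */
int power(int x, int n)          /* generic: n known only at run time */
{
    int r = 1;
    while (n-- > 0)
        r *= x;
    return r;
}

int power_3(int x)               /* specialized: loop fully unrolled */
{
    return x * x * x;
}
```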